Learning Obstacle Avoidance with an Operant Behavior Model

Authors

  • Diego A. Gutnisky
  • B. Silvano Zanutto
Abstract

Artificial intelligence researchers have been attracted by the idea of having robots learn how to accomplish a task, rather than being told explicitly how to do so. Reinforcement learning has been proposed as an appealing framework for controlling mobile agents. Robot learning research, as well as research on biological systems, faces many similar problems in achieving high flexibility across a variety of tasks. In this work, the control of a vehicle in an avoidance task by a previously developed operant learning model (a form of animal learning) is studied. An environment is simulated in which a mobile robot with proximity sensors has to minimize the punishment for colliding with obstacles. The results were compared with the Q-Learning algorithm, and the proposed model had better performance. In this way a new artificial intelligence agent inspired by neurobiology, psychology, and ethology research is proposed.
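The Q-Learning baseline mentioned above can be illustrated with a minimal tabular sketch. The setup below is hypothetical and not from the paper: a 1-D corridor of five cells with an obstacle in one cell, where stepping onto the obstacle yields a punishment of -1, loosely mirroring the collision-punishment framing of the simulated task.

```python
import random

# Hypothetical toy task (not from the paper): a 1-D corridor of 5 cells
# with an obstacle in cell 2; stepping onto it is punished with -1.
N_STATES = 5
OBSTACLE = 2
ACTIONS = [-1, +1]  # move left / move right


def step(state, action):
    """Apply an action; reward is -1 when the agent collides with the obstacle."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = -1.0 if next_state == OBSTACLE else 0.0
    return next_state, reward


def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action_index]
    for _ in range(episodes):
        state = rng.randrange(N_STATES)
        for _ in range(20):  # bounded episode length
            if rng.random() < epsilon:
                a = rng.randrange(2)  # explore
            else:
                a = max(range(2), key=lambda i: q[state][i])  # exploit
            nxt, r = step(state, ACTIONS[a])
            # Standard Q-learning update:
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            q[state][a] += alpha * (r + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q


q = train()
# From cell 1 (left of the obstacle), moving right leads into the obstacle,
# so the learned value of "right" ends up below the value of "left".
```

The paper's operant model learns from the same kind of punishment signal but without an explicit value table, which is what makes the comparison between the two approaches interesting.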


Similar Articles

Obstacle Avoidance by Means of an Operant Conditioning Model

This paper describes the application of a model of operant conditioning to the problem of obstacle avoidance with a wheeled mobile robot. The main characteristic of the applied model is that the robot learns to avoid obstacles through a learning-by-doing cycle without external supervision. A series of ultrasonic sensors act as Conditioned Stimuli (CS), while collisions act as an Unconditioned...

Full text

Dynamic Obstacle Avoidance by Distributed Algorithm based on Reinforcement Learning (RESEARCH NOTE)

In this paper we focus on the application of reinforcement learning to obstacle avoidance in dynamic environments in wireless sensor networks. A distributed algorithm based on reinforcement learning is developed for sensor networks to guide a mobile robot through the dynamic obstacles. The sensor network models the danger of the area under coverage as obstacles, and has the property of adoption o...

Full text

Neural competitive maps for reactive and adaptive navigation

We have recently introduced a neural network for reactive obstacle avoidance based on a model of classical and operant conditioning. In this article we describe the success of this model when implemented on two real autonomous robots. Our results show the promise of self-organizing neural networks in the domain of intelligent robotics.

Full text

Two-Factor Theory of Learning: Application to Maladaptive Behavior

Two-factor theory of avoidance remains one of the most influential theories of learning. It addresses the question of what works as a reinforcement of avoidance behavior, and proposes that: 1. an organism associates stimuli in the environment with aversive stimuli, and this allows these stimuli to evoke fear; 2. the avoidance response is reinforced by eliminating these warning stimuli or by esca...

Full text

Neuronal Architecture for Reactive and Adaptive Navigation of a Mobile Robot

A neural architecture that makes possible the integration of a kinematic adaptive neuro-controller for trajectory tracking and an obstacle avoidance adaptive neuro-controller is proposed for nonholonomic mobile robots. The kinematic adaptive neuro-controller is a real-time, unsupervised neural network that learns to control a nonholonomic mobile robot in a nonstationary environment, which is te...

Full text


Journal:
  • Artificial Life

Volume 10, Issue 1

Pages: -

Publication date: 2004